21 research outputs found

    Orchestration from the cloud to the edge

    The effective management of complex and heterogeneous computing environments is one of the biggest challenges that service and infrastructure providers face in the Cloud-to-Thing continuum era. Advanced orchestration systems are required to manage the resources of large-scale cloud data centres integrated with the big-data-generating IoT devices at the network edge. The orchestration system should be aware of all available resources and their current status in order to perform dynamic allocations and enable rapid deployment of applications. This chapter reviews the state of the art in orchestration along the Cloud-to-Thing continuum, with a specific emphasis on container-based orchestration (e.g. Docker Swarm and Kubernetes) and fog-specific orchestration architectures (e.g. SORTS, SOAFI, ETSI ISG MEC, and CONCERT).
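
    The core decision each of these orchestrators makes is placing an application's resource request on a node with enough spare capacity. The sketch below illustrates one simple best-fit placement policy over a small set of nodes; the Node and Request types and all capacity figures are illustrative assumptions, not Docker Swarm or Kubernetes APIs.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch only: a greedy best-fit placement of the kind an
// orchestrator performs. Node and Request are hypothetical types, not
// Kubernetes or Docker Swarm APIs.
public class PlacementSketch {

    record Node(String name, double freeCpu, double freeMemGb) {}
    record Request(double cpu, double memGb) {}

    // Choose the feasible node that leaves the least spare capacity after
    // placement (best fit), or empty if no node can host the request.
    static Optional<Node> place(List<Node> nodes, Request req) {
        return nodes.stream()
                .filter(n -> n.freeCpu() >= req.cpu() && n.freeMemGb() >= req.memGb())
                .min(Comparator.comparingDouble(
                        n -> (n.freeCpu() - req.cpu()) + (n.freeMemGb() - req.memGb())));
    }

    public static void main(String[] args) {
        List<Node> cluster = List.of(
                new Node("edge-1", 2.0, 4.0),
                new Node("edge-2", 8.0, 16.0),
                new Node("cloud-1", 32.0, 128.0));
        Request app = new Request(1.5, 3.0);
        System.out.println(place(cluster, app).map(Node::name).orElse("no capacity"));
    }
}
```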

    Modelling and simulation of ElasticSearch using CloudSim

    Simulation can be a powerful technique for evaluating the performance of large-scale cloud computing services in a relatively low-cost, low-risk and time-efficient manner. Large-scale data indexing, distribution and management are complex to analyse in a timely manner. In this paper, we extend the CloudSim cloud simulation framework to model and simulate a distributed search engine architecture and its workload characteristics. To test the simulation framework, we develop a model based on a real-world ElasticSearch deployment on Linknovate.com. An experimental evaluation of the framework, comparing simulated and actual query response time, precision and resource utilisation, suggests that the proposed framework is capable of predicting performance at different scales in a precise, accurate and efficient manner. The results can assist ElasticSearch users in managing their scalability and infrastructure requirements.
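
    As a rough illustration of the kind of model such a framework encodes, and not the paper's actual CloudSim extension, the sketch below runs a tiny discrete-event simulation of a search cluster: queries arrive at random and are served by whichever data node frees up first, yielding an estimate of mean query response time. The arrival rate, service time and cluster size are assumed values.

```java
import java.util.PriorityQueue;
import java.util.Random;

// Illustrative sketch only, not the paper's CloudSim extension: a tiny
// discrete-event model of a search cluster in which queries arrive at random
// and are served by whichever of the data nodes becomes idle first.
public class SearchClusterSketch {

    // Exponential inter-event sample for the given rate (events per second).
    static double expSample(Random rng, double rate) {
        return -Math.log(1.0 - rng.nextDouble()) / rate;
    }

    public static void main(String[] args) {
        double arrivalRatePerSec = 40.0;   // assumed query load
        double meanServiceSec = 0.05;      // assumed per-query service time
        int dataNodes = 3;                 // cluster size under evaluation
        int queries = 100_000;

        Random rng = new Random(42);
        PriorityQueue<Double> nodeFreeAt = new PriorityQueue<>(); // next idle time per node
        for (int i = 0; i < dataNodes; i++) nodeFreeAt.add(0.0);

        double clock = 0.0, totalResponse = 0.0;
        for (int q = 0; q < queries; q++) {
            clock += expSample(rng, arrivalRatePerSec);          // next query arrival
            double start = Math.max(clock, nodeFreeAt.poll());   // wait if all nodes busy
            double finish = start + expSample(rng, 1.0 / meanServiceSec);
            nodeFreeAt.add(finish);
            totalResponse += finish - clock;
        }
        System.out.printf("mean query response time ~ %.1f ms%n",
                1000.0 * totalResponse / queries);
    }
}
```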

    Simulating fog and edge computing scenarios: an overview and research challenges

    The fourth industrial revolution heralds a paradigm shift in how people, processes, things, data and networks communicate and connect with each other. Conventional computing infrastructures are struggling to satisfy dramatic growth in demand from a deluge of connected heterogeneous endpoints located at the edge of networks while, at the same time, meeting quality of service levels. The complexity of computing at the edge makes it increasingly difficult for infrastructure providers to plan for and provision resources to meet this demand. While simulation frameworks are used extensively to model cloud computing environments in order to test and validate technical solutions, they are at a nascent stage of development and adoption for fog and edge computing. This paper provides an overview of the challenges posed by fog and edge computing in relation to simulation.

    Analysing dependability and performance of a real-world Elastic Search application

    Increased complexity in IT, big data, and advanced analytical techniques are some of the trends driving demand for more sophisticated and scalable search technology. Despite Quality of Service (QoS) being a critical success factor in most enterprise software service offerings, it is often not a generic component of the enterprise search software stack. In this paper, we explore enterprise search engine dependability and performance using a real-world company architecture and associated data sourced from an ElasticSearch implementation on Linknovate.com. We propose a Fault Tree model to assess the availability and reliability of the Linknovate.com architecture. The results of the Fault Tree model are fed into a Stochastic Petri Net (SPN) model to analyse how failures and redundancy impact the application performance of the use-case system. Availability and mean time to failure (MTTF) were used to evaluate reliability, and throughput was used to evaluate the performance of the target system. The best results for all three metrics were returned in scenarios with high levels of redundancy.
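
    As a simplified illustration of how redundancy feeds into such availability figures, and not the paper's Fault Tree or SPN models, the sketch below combines assumed MTTF/MTTR values for a load balancer and replicated data nodes in series and in parallel.

```java
// Illustrative sketch only: how availability combines in a fault-tree-style
// model. The MTTF/MTTR figures are assumptions for illustration, not values
// from the Linknovate.com study.
public class AvailabilitySketch {

    // Steady-state availability from MTTF and MTTR: A = MTTF / (MTTF + MTTR).
    static double availability(double mttfHours, double mttrHours) {
        return mttfHours / (mttfHours + mttrHours);
    }

    // n redundant replicas in parallel: the subsystem fails only if all fail.
    static double parallel(double singleAvailability, int replicas) {
        return 1.0 - Math.pow(1.0 - singleAvailability, replicas);
    }

    // Components in series: the system is up only if every component is up.
    static double series(double... availabilities) {
        double a = 1.0;
        for (double x : availabilities) a *= x;
        return a;
    }

    public static void main(String[] args) {
        double loadBalancer = availability(8760, 2);    // assumed figures
        double singleDataNode = availability(4380, 8);
        for (int replicas = 1; replicas <= 3; replicas++) {
            double system = series(loadBalancer, parallel(singleDataNode, replicas));
            System.out.printf("%d replica(s): system availability = %.5f%n", replicas, system);
        }
    }
}
```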

    A preliminary systematic review of computer science literature on cloud computing research using open source simulation platforms.

    Research and experimentation on live hyperscale clouds is limited by their scale, complexity, value, and issues of commercial sensitivity. As a result, there has been an increase in the development, adaptation and extension of cloud simulation platforms to enable enterprises, application developers and researchers to undertake both testing and experimentation. While there have been numerous surveys of cloud simulation platforms and their features, few examine how these platforms are being used for research purposes. This paper provides a preliminary systematic review of the literature on this topic, covering 256 papers from 2009 to 2016. The paper aims to provide insights into the current status of cloud computing research using open source cloud simulation platforms. Our two-level analysis scheme includes a descriptive and synthetic analysis against a highly cited taxonomy of cloud computing. The analysis uncovers some imbalances in research and the need for a more granular and refined taxonomy against which to classify cloud computing research using simulators. The paper can be used to guide literature reviews in the area and identifies potential research opportunities for cloud computing and simulation researchers, complementing extant surveys on cloud simulation platforms.

    Towards simulation and optimization of cache placement on large virtual Content Distribution Networks

    IP video traffic is forecast to be 82% of all IP traffic by 2022. Traditionally, Content Distribution Networks (CDNs) were used extensively to meet the quality of service levels for IP video services. To handle the dramatic growth in video traffic, CDN operators are migrating their infrastructure to the cloud and fog in order to leverage their greater availability and flexibility. For hyper-scale deployments, energy consumption, cache placement, and resource availability can be analyzed using simulation in order to improve resource utilization and performance. Recently, a discrete-time simulator for modelling hierarchical virtual CDNs (vCDNs) was proposed, with reduced memory requirements and increased performance on multi-core systems, to cater for the scale and complexity of these networks. The first iteration of this discrete-time simulator had a number of limitations impacting accuracy and applicability: it supported only tree-based topologies, results were computed per level, and requests for the same content differed only in time duration. In this paper, we present an improved simulation framework that (a) supports graph-based network topologies, (b) reconstitutes requests so that their requirements can be differentiated, and (c) computes statistics per site and network metrics per link, improving granularity and parallel performance. Moreover, we also propose a two-phase optimization scheme that uses simulation outputs to guide the search for optimal cache placements. To evaluate our proposal, we simulate a vCDN network based on real traces obtained from the BT vCDN infrastructure and analyze performance and scalability aspects.
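
    A minimal sketch of the idea of letting simulation outputs drive cache placement is shown below. It is a simple greedy heuristic, not the two-phase optimization scheme proposed in the paper, and the content names, request counts and cache capacity are illustrative assumptions.

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only, not the paper's two-phase optimizer: a greedy
// heuristic that fills a site's cache with its most requested items, using
// per-site request counts of the kind a simulation run would produce.
public class CachePlacementSketch {

    // Select up to `slots` items for one site, preferring the highest request counts.
    static Map<String, Long> placeForSite(Map<String, Long> requestCounts, int slots) {
        Map<String, Long> placed = new LinkedHashMap<>();
        requestCounts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
                .limit(slots)
                .forEach(e -> placed.put(e.getKey(), e.getValue()));
        return placed;
    }

    public static void main(String[] args) {
        // Hypothetical request counts for one edge site, e.g. from a simulation run.
        Map<String, Long> siteCounts = Map.of(
                "movie-A", 9200L, "movie-B", 4100L,
                "series-C", 7800L, "clip-D", 600L);
        System.out.println(placeForSite(siteCounts, 2)); // cache the top two items
    }
}
```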

    Cost aware decision support for cloud data centres

    Cloud computing has become a popular IT and business trend in recent years, due to its scalability, reliability, ease of access and low costs. The demand for cloud services has led to a steep increase in the number and size of data centres around the world, with more hardware-dense, large-scale data centres being built. A modern data centre has become a very complex system with a multitude of interconnected hardware resources that is hard to manage in an autonomic and cost-effective manner. This thesis presents novel techniques, models and software for estimating the impact of data centre planning decisions on Total Cost of Ownership (TCO). The use of a Discrete Event Simulation (DES) framework as a decision support tool is proposed for understanding the operational efficiencies and financial implications of different cloud resource management techniques and different hardware components. Based on a detailed assessment of the current state of cloud simulation tools, resource management techniques and cost models available for data centre management, an on-the-fly compilation of simulation models for large-scale cloud data centres has been proposed and delivered. This enables an automated approach for extracting all required simulation modelling parameters from existing data centre monitoring tools, improving the speed and usability of the simulation approach. A unique integration approach between a production-grade cloud optimisation framework and an offline simulation analysis framework enables direct simulation of cloud resource management policies, allowing experimentation with different “what-if” scenarios using existing production tooling. An improved set of cost models was developed alongside the simulation framework; these models calculate costs from the time-series data produced by the simulation, thus taking into account the effects of resource management policies and enabling a more granular overview of costs than traditional TCO calculation approaches. The simulation framework implementation was validated using real case study data. In addition, the practical use of the proposed cloud data centre decision support approach is demonstrated through an extended case example, estimating and planning for the impact of a data centre hardware upgrade and the resultant system performance and costs.
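
    As a simplified illustration of turning simulated time-series output into a cost figure, and not the cost models developed in the thesis, the sketch below applies an assumed linear power model to hourly CPU utilisation samples and prices the resulting energy; all figures are assumptions.

```java
// Illustrative sketch only, not the cost models developed in the thesis:
// converting a simulated utilisation time series for one host into an energy
// cost. The linear power model and all prices are assumptions.
public class CostFromTimeSeriesSketch {

    // Linear power model: idle draw plus a utilisation-proportional share of the range.
    static double powerWatts(double utilisation, double idleW, double peakW) {
        return idleW + utilisation * (peakW - idleW);
    }

    public static void main(String[] args) {
        // Simulated CPU utilisation samples for one host, one sample per hour.
        double[] hourlyUtilisation = {0.10, 0.35, 0.80, 0.95, 0.60, 0.20};
        double idleW = 90.0, peakW = 250.0, pricePerKwh = 0.25;

        double energyKwh = 0.0;
        for (double u : hourlyUtilisation) {
            energyKwh += powerWatts(u, idleW, peakW) / 1000.0; // 1-hour samples
        }
        System.out.printf("energy = %.2f kWh, cost = %.2f over %d hours%n",
                energyKwh, energyKwh * pricePerKwh, hourlyUtilisation.length);
    }
}
```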

    Simulating resource management across the cloud-to-thing continuum: a survey and future directions

    In recent years, there has been significant advancement in resource management mechanisms for cloud computing infrastructure performance in terms of cost, quality of service (QoS) and energy consumption. The emergence of the Internet of Things has led to the development of infrastructure that extends beyond centralised data centres, from the cloud to the edge, the so-called cloud-to-thing continuum (C2T). This infrastructure is characterised by extreme heterogeneity, geographic distribution, and complexity, where the key performance indicators (KPIs) for the traditional model of cloud computing may no longer apply in the same way. Existing resource management mechanisms may not be suitable for such complex environments and therefore require thorough testing, validation and evaluation before even being considered for live system implementation. Similarly, previously discounted resource management proposals may be more relevant and worthy of revisiting. Simulation is a widely used technique in the development and evaluation of resource management mechanisms for cloud computing but is a relatively nascent research area for new C2T computing paradigms such as fog and edge computing. We present a methodical literature analysis of C2T resource management research using simulation software tools to assist researchers in identifying suitable methods, algorithms, and simulation approaches for future research. We analyse 35 research articles from a total collection of 317 journal articles published from January 2009 to March 2019. We present our descriptive and synthetic analysis from a variety of perspectives including resource management, C2T layer, and simulation.

    Simulating and evaluating a real-world elastic search system using the RECAP DES simulator

    Simulation has become an indispensable technique for modelling and evaluating the performance of large-scale systems efficiently and at a relatively low cost. ElasticSearch (ES) is one of the most popular open-source large-scale distributed data indexing systems worldwide. In this paper, we use the RECAP Discrete Event Simulator (DES), an extension of CloudSimPlus, to model and evaluate the performance of a real-world cloud-based ES deployment by an Irish small and medium-sized enterprise (SME), Opening.io. Following simulation experiments that explored how much query traffic the existing Opening.io architecture could cater for before performance degradation, a revised architecture was proposed, adding a new virtual machine in order to dissolve the bottleneck. The simulation results suggest that the proposed improved architecture can handle significantly more query traffic (about 71% more) than the current architecture used by Opening.io. The results also suggest that the RECAP DES is suitable for simulating ES systems and can help companies understand their infrastructure bottlenecks under various traffic scenarios and inform optimisation and scalability decisions.
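
    A back-of-the-envelope version of the capacity question these experiments answer is sketched below: given an assumed per-VM query capacity and a utilisation cap, how much extra traffic can the current cluster absorb, and what does one more VM buy? This is not the RECAP DES simulator, and none of the figures are Opening.io data.

```java
// Illustrative sketch only, not the RECAP DES simulator: a simple headroom
// check of the kind such simulation experiments answer. Per-VM capacity, the
// utilisation cap and current traffic are assumptions, not Opening.io data.
public class HeadroomSketch {

    // Largest sustainable query rate, given per-VM capacity and a utilisation cap.
    static double maxTraffic(int vms, double perVmQps, double utilisationCap) {
        return vms * perVmQps * utilisationCap;
    }

    public static void main(String[] args) {
        double perVmQps = 120.0;     // assumed per-VM query capacity
        double utilisationCap = 0.8; // keep VMs below 80% busy
        double currentQps = 250.0;   // assumed current traffic

        for (int vms = 3; vms <= 4; vms++) {
            double limit = maxTraffic(vms, perVmQps, utilisationCap);
            System.out.printf("%d VMs: headroom = %.0f qps (%.0f%% above current traffic)%n",
                    vms, limit - currentQps, 100.0 * (limit - currentQps) / currentQps);
        }
    }
}
```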